The unbiased learning to rank (ULTR) problem has been greatly advanced by recent deep learning techniques and well-designed debiasing algorithms. However, promising results on existing benchmark datasets may not extend to real-world scenarios, owing to the following shortcomings observed in those popular benchmarks: (1) outdated semantic feature extraction, where state-of-the-art large-scale pre-trained language models such as BERT cannot be exploited because the raw text is missing; (2) incomplete display features for in-depth study of ULTR, e.g., the abstracts of displayed documents, necessary for analyzing click-related biases, are missing; (3) lack of real-world user feedback, leading to the prevalence of synthetic datasets in empirical studies. To overcome the above shortcomings, we introduce the Baidu-ULTR dataset. It involves 1.2 billion randomly sampled search sessions and 7,008 expert-annotated queries, which is orders of magnitude larger than existing ones. Baidu-ULTR provides: (1) the original semantic features and a pre-trained language model for easy usage; (2) sufficient display information, such as position, displayed height, and displayed abstract, enabling comprehensive study of different biases with advanced techniques such as causal discovery and meta-learning; (3) rich user feedback on search result pages (SERPs), such as dwell time, allowing for user engagement optimization and promoting the exploration of multi-task learning in ULTR. In this paper, we present the design principles of Baidu-ULTR and the performance of benchmark ULTR algorithms on this new data resource, favoring the exploration of ranking for long-tail queries and of pre-training tasks for ranking. The Baidu-ULTR dataset and corresponding baseline implementations are available at https://github.com/chuxiaokai/baidu_ultr_dataset.
Beyond topical relevance, passage ranking for open-domain factoid question answering also requires a passage to contain the answer (answerability). While some recent studies have incorporated some reading capability into rankers to account for answerability, rankers are still hindered by the noisy nature of the training data typically available in this area, which considers any passage containing the answer entity as a positive sample. However, the answer entity in a passage is not necessarily related to the given question. To address this problem, we propose a passage re-ranking approach based on generative adversarial neural networks, called \ttt{PReGAN}, which incorporates a discriminator on answerability in addition to topical relevance. The goal is to force the generator to rank higher the passages that are both topically relevant and contain the answer. Experiments on five public datasets show that \ttt{PReGAN} can better rank appropriate passages, thereby improving the effectiveness of QA systems and outperforming existing approaches without using external data.
Post-click conversion, as a strong signal indicating user preference, is beneficial for building recommender systems. However, accurately estimating the post-click conversion rate (CVR) is challenging due to selection bias, i.e., observed click events usually happen on users' preferred items. Currently, most existing methods utilize counterfactual learning to debias recommender systems. Among them, the doubly robust (DR) estimator achieves competitive performance by combining the error-imputation-based (EIB) estimation and the inverse-propensity-score (IPS) estimation in a doubly robust way. However, inaccurate error imputation may result in higher variance than the IPS estimator. Worse still, existing methods typically use simple model-agnostic methods to estimate the imputation error, which are not sufficient to approximate the dynamically changing model-correlated target (i.e., the gradient direction of the prediction model). To solve these problems, we first derive the bias and variance of the DR estimator. Based on this, a more robust doubly robust (MRDR) estimator is proposed to further reduce its variance while retaining its double robustness. Moreover, we propose a novel double learning approach for the MRDR estimator, which can convert the error imputation into general CVR estimation. In addition, we empirically verify that the proposed learning scheme can further eliminate the high-variance problem of imputation learning. To evaluate its effectiveness, extensive experiments are conducted on a semi-synthetic dataset and two real-world datasets. The results demonstrate the superiority of the proposed approach over state-of-the-art methods. The code is available at https://github.com/guosyjlu/mrdr-dl.
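To make the estimator combination above concrete, here is a minimal NumPy sketch of the vanilla DR estimator that MRDR builds on. The function and variable names (`dr_estimator`, `o`, `e_hat`, `p_hat`) are ours, and the MRDR variance-reduction weighting itself is not shown; this is only the baseline DR structure under our own assumptions:

```python
import numpy as np

def dr_estimator(o, e_true, e_hat, p_hat):
    """Doubly robust estimate of the average prediction error.

    o      : 0/1 array, whether the (user, item) pair's outcome was observed
    e_true : prediction error, only trusted where o == 1
    e_hat  : imputed error for every pair (the EIB part)
    p_hat  : estimated propensity of observation for every pair (the IPS part)

    The imputed error is used everywhere, and the observed pairs correct it
    with an inverse-propensity-weighted residual; the estimate is unbiased if
    either the imputation or the propensities are accurate.
    """
    return float(np.mean(e_hat + o * (e_true - e_hat) / p_hat))
```

With perfect imputation (`e_hat == e_true`) the estimator reduces to the plain average error, and with zero imputation it reduces to the IPS estimator.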
Graph Neural Networks (GNNs) have shown satisfying performance on various graph learning tasks. To achieve better fitting capability, most GNNs are with a large number of parameters, which makes these GNNs computationally expensive. Therefore, it is difficult to deploy them onto edge devices with scarce computational resources, e.g., mobile phones and wearable smart devices. Knowledge Distillation (KD) is a common solution to compress GNNs, where a light-weighted model (i.e., the student model) is encouraged to mimic the behavior of a computationally expensive GNN (i.e., the teacher GNN model). Nevertheless, most existing GNN-based KD methods lack fairness consideration. As a consequence, the student model usually inherits and even exaggerates the bias from the teacher GNN. To handle such a problem, we take initial steps towards fair knowledge distillation for GNNs. Specifically, we first formulate a novel problem of fair knowledge distillation for GNN-based teacher-student frameworks. Then we propose a principled framework named RELIANT to mitigate the bias exhibited by the student model. Notably, the design of RELIANT is decoupled from any specific teacher and student model structures, and thus can be easily adapted to various GNN-based KD frameworks. We perform extensive experiments on multiple real-world datasets, which corroborates that RELIANT achieves less biased GNN knowledge distillation while maintaining high prediction utility.
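The bias-inheritance problem described above arises in the standard soft-label distillation objective, sketched below. This is generic temperature-scaled KD (not the RELIANT fairness regularizer itself), and all names are illustrative:

```python
import numpy as np

def softmax(z, T=1.0):
    """Temperature-scaled softmax with a max-shift for numerical stability."""
    z = z / T
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def kd_loss(student_logits, teacher_logits, T=2.0):
    """Soft-label distillation loss: KL(teacher || student) at temperature T.

    The student is trained to match the teacher's softened class
    distribution; any bias encoded in the teacher's soft targets is
    therefore passed straight to the student.
    """
    p = softmax(teacher_logits, T)   # teacher soft targets
    q = softmax(student_logits, T)   # student soft predictions
    kl = np.sum(p * (np.log(p) - np.log(q)), axis=-1)
    return float(np.mean(kl) * T * T)   # T^2 keeps gradient scale comparable
```

A fairness-aware scheme like RELIANT would add a debiasing term on top of (or modify) this objective rather than replace it.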
In robust Markov decision processes (MDPs), the uncertainty in the transition kernel is addressed by finding a policy that optimizes the worst-case performance over an uncertainty set of MDPs. While much of the literature has focused on discounted MDPs, robust average-reward MDPs remain largely unexplored. In this paper, we focus on robust average-reward MDPs, where the goal is to find a policy that optimizes the worst-case average reward over an uncertainty set. We first take an approach that approximates average-reward MDPs using discounted MDPs. We prove that the robust discounted value function converges to the robust average-reward as the discount factor $\gamma$ goes to $1$, and moreover, when $\gamma$ is large, any optimal policy of the robust discounted MDP is also an optimal policy of the robust average-reward. We further design a robust dynamic programming approach, and theoretically characterize its convergence to the optimum. Then, we investigate robust average-reward MDPs directly without using discounted MDPs as an intermediate step. We derive the robust Bellman equation for robust average-reward MDPs, prove that the optimal policy can be derived from its solution, and further design a robust relative value iteration algorithm that provably finds its solution, or equivalently, the optimal robust policy.
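As a rough illustration of the robust relative value iteration idea, the sketch below runs damped relative value iteration in which an adversary picks the worst transition kernel from a finite uncertainty set at every step. This is a toy rendering under our own assumptions (finite uncertainty set, damping as an aperiodicity transform, tabular MDP), not the paper's algorithm verbatim:

```python
import numpy as np

def robust_rvi(rewards, kernels, tau=0.5, n_iter=1000, tol=1e-10):
    """Robust relative value iteration over a finite uncertainty set.

    rewards : (S, A) array, r(s, a)
    kernels : list of (S, A, S) transition tensors forming the uncertainty set
    tau     : damping factor (aperiodicity transform) to ensure convergence
    Returns (worst-case optimal average reward, greedy policy).
    """
    S, A = rewards.shape
    h = np.zeros(S)          # relative value function, normalized so h[0] == 0
    gain = 0.0
    for _ in range(n_iter):
        # Robust Bellman operator: the adversary picks the worst kernel.
        q = np.min([rewards + P @ h for P in kernels], axis=0)   # (S, A)
        th = q.max(axis=1)
        gain = th[0]                          # gain read off at reference state 0
        h_new = (1.0 - tau) * h + tau * th    # damped update
        h_new = h_new - h_new[0]              # renormalize at the reference state
        if np.max(np.abs(h_new - h)) < tol:
            h = h_new
            break
        h = h_new
    return gain, q.argmax(axis=1)
```

For a singleton uncertainty set this reduces to ordinary relative value iteration; with several kernels the returned gain is the worst-case average reward over the set.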
Medical image segmentation (MIS) is essential for supporting disease diagnosis and treatment effect assessment. Despite considerable advances in artificial intelligence (AI) for MIS, clinicians remain skeptical of its utility, maintaining low confidence in such black box systems, with this problem being exacerbated by low generalization for out-of-distribution (OOD) data. To move towards effective clinical utilization, we propose a foundation model named EvidenceCap, which makes the box transparent in a quantifiable way by uncertainty estimation. EvidenceCap not only makes AI visible in regions of uncertainty and OOD data, but also enhances the reliability, robustness, and computational efficiency of MIS. Uncertainty is modeled explicitly through subjective logic theory to gather strong evidence from features. We show the effectiveness of EvidenceCap in three segmentation datasets and apply it to the clinic. Our work sheds light on clinical safe applications and explainable AI, and can contribute towards trustworthiness in the medical domain.
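The subjective-logic uncertainty mentioned above can be illustrated with the standard evidence-to-opinion mapping, in which non-negative per-class evidence induces a Dirichlet distribution and the leftover mass becomes the uncertainty. This generic formulation (names ours) is a sketch of the idea, not EvidenceCap's exact model:

```python
import numpy as np

def subjective_opinion(evidence):
    """Map non-negative per-class evidence to belief masses and uncertainty.

    Under subjective logic, evidence e_k induces a Dirichlet with
    alpha_k = e_k + 1; the belief mass is b_k = e_k / S and the
    uncertainty is u = K / S, where S = sum(alpha) and K is the number
    of classes. Beliefs and u always sum to 1, so weak evidence
    automatically yields high uncertainty.
    """
    evidence = np.asarray(evidence, dtype=float)
    K = evidence.shape[-1]
    alpha = evidence + 1.0
    S = alpha.sum(axis=-1, keepdims=True)
    belief = evidence / S
    u = K / S.squeeze(-1)
    return belief, u
```

A pixel with no gathered evidence gets u = 1 (fully uncertain), which is what lets such a model flag OOD regions.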
Vertical Federated Learning (VFL) is widely utilized in real-world applications to enable collaborative learning while protecting data privacy and safety. However, previous works show that parties without labels (passive parties) in VFL can infer the sensitive label information owned by the party with labels (active party) or execute backdoor attacks to VFL. Meanwhile, active party can also infer sensitive feature information from passive party. All these pose new privacy and security challenges to VFL systems. We propose a new general defense method which limits the mutual information between private raw data, including both features and labels, and intermediate outputs to achieve a better trade-off between model utility and privacy. We term this defense Mutual Information Regularization Defense (MID). We theoretically and experimentally testify the effectiveness of our MID method in defending existing attacks in VFL, including label inference attacks, backdoor attacks and feature reconstruction attacks.
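A common way to realize a mutual-information cap like the one described above is a variational information-bottleneck penalty: the intermediate output is sampled from a Gaussian whose parameters depend on the input, and its KL divergence to a standard normal upper-bounds the information flowing through. The sketch below shows that penalty; the names are ours and MID's exact bound may differ:

```python
import numpy as np

def gaussian_kl(mu, logvar):
    """KL( N(mu, diag(exp(logvar))) || N(0, I) ), summed over dimensions.

    This is the usual variational upper bound on I(input; Z) for a
    stochastic intermediate representation Z ~ N(mu(x), sigma(x)^2):
    penalizing it limits how much about the raw features or labels can
    leak through the exchanged intermediate outputs.
    """
    return 0.5 * np.sum(np.exp(logvar) + mu ** 2 - 1.0 - logvar, axis=-1)

def mid_objective(task_loss, mu, logvar, lam=0.1):
    """Total objective: utility loss plus the mutual-information penalty.

    lam trades model utility against privacy: larger lam squeezes the
    bottleneck harder.
    """
    return float(task_loss + lam * np.mean(gaussian_kl(mu, logvar)))
```

At mu = 0 and unit variance the penalty vanishes, i.e., an intermediate output carrying no input-specific information is not penalized.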
Video semantic segmentation (VSS) is beneficial for dealing with dynamic scenes due to the continuous property of the real-world environment. On the one hand, some methods alleviate the predicted inconsistent problem between continuous frames. On the other hand, other methods employ the previous frame as the prior information to assist in segmenting the current frame. Although the previous methods achieve superior performances on the independent and identically distributed (i.i.d) data, they can not generalize well on other unseen domains. Thus, we explore a new task, the video generalizable semantic segmentation (VGSS) task that considers both continuous frames and domain generalization. In this paper, we propose a class-wise non-salient region generalized (CNSG) framework for the VGSS task. Concretely, we first define the class-wise non-salient feature, which describes features of the class-wise non-salient region that carry more generalizable information. Then, we propose a class-wise non-salient feature reasoning strategy to select and enhance the most generalized channels adaptively. Finally, we propose an inter-frame non-salient centroid alignment loss to alleviate the predicted inconsistent problem in the VGSS task. We also extend our video-based framework to the image-based generalizable semantic segmentation (IGSS) task. Experiments demonstrate that our CNSG framework yields significant improvement in the VGSS and IGSS tasks.
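The inter-frame centroid alignment idea above can be sketched as pulling together class-wise mean features computed from consecutive frames. This toy version (names and exact form ours, operating on flattened per-pixel features) only illustrates the concept, not the paper's precise loss:

```python
import numpy as np

def centroid_alignment_loss(feat_t, feat_tp1, labels_t, labels_tp1, n_classes):
    """Toy inter-frame centroid alignment.

    feat_t, feat_tp1     : (N, D) per-pixel features of two consecutive frames
    labels_t, labels_tp1 : (N,) predicted class per pixel
    For every class present in both frames, penalize the squared distance
    between its mean feature vectors, encouraging temporally consistent
    class representations.
    """
    loss, count = 0.0, 0
    for c in range(n_classes):
        m_t, m_tp1 = labels_t == c, labels_tp1 == c
        if m_t.any() and m_tp1.any():
            diff = feat_t[m_t].mean(axis=0) - feat_tp1[m_tp1].mean(axis=0)
            loss += float(np.sum(diff ** 2))
            count += 1
    return loss / max(count, 1)
```

Identical consecutive frames incur zero loss, and the penalty grows as the class-wise representations drift apart between frames.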
The stock market prediction has been a traditional yet complex problem researched within diverse research areas and application domains due to its non-linear, highly volatile and complex nature. Existing surveys on stock market prediction often focus on traditional machine learning methods instead of deep learning methods. Deep learning has dominated many domains, gained much success and popularity in recent years in stock market prediction. This motivates us to provide a structured and comprehensive overview of the research on stock market prediction focusing on deep learning techniques. We present four elaborated subtasks of stock market prediction and propose a novel taxonomy to summarize the state-of-the-art models based on deep neural networks from 2011 to 2022. In addition, we also provide detailed statistics on the datasets and evaluation metrics commonly used in the stock market. Finally, we highlight some open issues and point out several future directions by sharing some new perspectives on stock market prediction.
We present X-Decoder, a generalized decoding model that can predict pixel-level segmentation and language tokens seamlessly. X-Decoder takes as input two types of queries: (i) generic non-semantic queries and (ii) semantic queries induced from text inputs, to decode different pixel-level and token-level outputs in the same semantic space. With such a novel design, X-Decoder is the first work that provides a unified way to support all types of image segmentation and a variety of vision-language (VL) tasks. Further, our design enables seamless interactions across tasks at different granularities and brings mutual benefits by learning a common and rich pixel-level visual-semantic understanding space, without any pseudo-labeling. After pretraining on a mixed set of a limited amount of segmentation data and millions of image-text pairs, X-Decoder exhibits strong transferability to a wide range of downstream tasks in both zero-shot and finetuning settings. Notably, it achieves (1) state-of-the-art results on open-vocabulary segmentation and referring segmentation on eight datasets; (2) better or competitive finetuned performance to other generalist and specialist models on segmentation and VL tasks; and (3) flexibility for efficient finetuning and novel task composition (e.g., referring captioning and image editing). Code, demo, video, and visualization are available at https://x-decoder-vl.github.io.